Generating Wikipedia by Summarizing Long Sequences

Authors

  • Peter J. Liu
  • Mohammad Saleh
  • Etienne Pot
  • Ben Goodrich
  • Ryan Sepassi
  • Lukasz Kaiser
  • Noam Shazeer
Abstract

We show that generating English Wikipedia articles can be approached as a multi-document summarization of source documents. We use extractive summarization to coarsely identify salient information and a neural abstractive model to generate the article. For the abstractive model, we introduce a decoder-only architecture that can scalably attend to very long sequences, much longer than typical encoder-decoder architectures used in sequence transduction. We show that this model can generate fluent, coherent multi-sentence paragraphs and even whole Wikipedia articles. When given reference documents, we show it can extract relevant factual information as reflected in perplexity, ROUGE scores and human evaluations.
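As a rough illustration of the long-sequence idea in the abstract, the sketch below shows causal decoder-only self-attention where the keys and values are mean-pooled over fixed windows, so each position attends to n/stride compressed slots instead of n tokens. This is a hedged sketch only, not the paper's actual architecture: the function names, the pooling scheme, and the stride are all assumptions for illustration.

```python
# Illustrative sketch only, not the paper's exact architecture: causal
# decoder-only self-attention with mean-pooled keys/values, shrinking
# attention cost on a length-n input from O(n^2) to O(n * n/stride).
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def compressed_causal_attention(x, wq, wk, wv, stride=4):
    """x: (n, d) token states; wq, wk, wv: (d, d) projection matrices."""
    n, d = x.shape
    q = x @ wq                                   # one query per position
    m = n // stride                              # number of compressed slots
    k = (x[:m * stride] @ wk).reshape(m, stride, d).mean(axis=1)  # (m, d)
    v = (x[:m * stride] @ wv).reshape(m, stride, d).mean(axis=1)  # (m, d)
    scores = q @ k.T / np.sqrt(d)                # (n, m) instead of (n, n)
    # Causality: slot j summarizes tokens [j*stride, (j+1)*stride), so
    # position i may only attend to slots ending at or before i.
    slot_end = (np.arange(m) + 1) * stride - 1
    scores[slot_end[None, :] > np.arange(n)[:, None]] = -1e9
    # The first stride-1 positions see no complete slot; point them at
    # slot 0 purely to keep the sketch simple (a real model would fall
    # back to local attention here).
    scores[: stride - 1, 0] = 0.0
    return softmax(scores) @ v                   # (n, d)

# Tiny usage example with random weights.
rng = np.random.default_rng(0)
x = rng.normal(size=(16, 8))
wq, wk, wv = (rng.normal(size=(8, 8)) for _ in range(3))
out = compressed_causal_attention(x, wq, wk, wv)   # shape (16, 8)
```

The point of the compression step is only to show why a decoder-only model can afford much longer inputs than a standard encoder-decoder: the quadratic term in sequence length is divided by the compression factor.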


Similar Articles

Generating Image Descriptions Using Dependency Relational Patterns

This paper presents a novel approach to automatic captioning of geo-tagged images by summarizing multiple web documents that contain information related to an image’s location. The summarizer is biased by dependency pattern models towards sentences which contain features typically provided for different scene types, such as those of churches, bridges, etc. Our results show that summaries biased b...


Summarization of Multi-Document Topic Hierarchies using Submodular Mixtures

We study the problem of summarizing DAG-structured topic hierarchies over a given set of documents. Example applications include automatically generating Wikipedia disambiguation pages for a set of articles, and generating candidate multi-labels for preparing machine learning datasets (e.g., for text classification, functional genomics, and image classification). Unlike previous work, which foc...
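For context on this entry, the sketch below shows the standard greedy routine that underlies most submodular summarization methods. It is an illustrative textbook algorithm, not the cited paper's specific mixture method; the function and variable names are assumptions.

```python
# Illustrative sketch, not the cited paper's mixture method: plain greedy
# maximization of a monotone submodular coverage function, which carries
# the classic (1 - 1/e) approximation guarantee.
def greedy_submodular_select(items, coverage, budget):
    """items: candidate units as concept sets; coverage: monotone
    submodular set function; budget: number of units to pick."""
    selected, covered = [], set()
    for _ in range(budget):
        best_i, best_gain = None, 0.0
        for i, concepts in enumerate(items):
            if i in selected:
                continue
            gain = coverage(covered | concepts) - coverage(covered)
            if gain > best_gain:
                best_i, best_gain = i, gain
        if best_i is None:        # no remaining item adds marginal value
            break
        selected.append(best_i)
        covered |= items[best_i]
    return selected

# Coverage = number of distinct concepts reached (monotone submodular).
sentences = [{"wiki", "nlp"}, {"nlp"}, {"dag", "topic"}, {"wiki", "dag"}]
print(greedy_submodular_select(sentences, len, budget=2))  # -> [0, 2]
```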


Extending DBpedia with List Structures in Wikipedia Articles

Ontologies are the basis of the Semantic Web, but their construction and maintenance are costly, so there is much interest in automating their construction. Wikipedia is considered a promising source of knowledge because of its characteristics. DBpedia extracts a large amount of ontological information from Wikipedia. However, DBpedia focuses exclusively on infoboxes (i.e., tabl...


"The sum of all human knowledge": A systematic review of scholarly research on the content of Wikipedia

Wikipedia may well be the best-developed attempt thus far in the enduring quest to gather all human knowledge in one place. Its accomplishments in this regard have made it an irresistible point of inquiry for researchers from various fields. A decade of research has shed light on many aspects of the Wikipedia community, its processes, and its content. However, due to the variet...


WikiKreator: Improving Wikipedia Stubs Automatically

Stubs on Wikipedia often lack comprehensive information. The high cost of editing Wikipedia and its limited number of active contributors curb its consistent growth. In this work, we present WikiKreator, a system capable of automatically generating content to improve existing stubs on Wikipedia. The system has two components. First, a text classifier buil...



Journal:
  • CoRR

Volume: abs/1801.10198

Publication date: 2018